

Rogue states could use AI to do 'real harm', warns ex-Google CEO

The Guardian

Google's former chief executive has warned that artificial intelligence could be used by rogue states such as North Korea, Iran and Russia to "harm innocent people". Eric Schmidt, who held senior posts at Google from 2001 to 2017, told BBC Radio 4's Today programme that those countries and terrorists could adopt and misuse the technology to develop weapons to create "a bad biological attack from some evil person". The tech billionaire said: "The real fears that I have are not the ones that most people talk about AI – I talk about extreme risk. "Think about North Korea, or Iran, or even Russia, who have some evil goal. This technology is fast enough for them to adopt that they could misuse it and do real harm."


Artificial Intelligence Raises Real Risks from Malicious Hackers, Rogue States

#artificialintelligence

Rapid advances in artificial intelligence are raising risks that malicious users will soon exploit the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns. The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, sounded the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers. The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years. "We all agree there are a lot of positive applications of AI," said Miles Brundage, a research fellow at Oxford's Future of Humanity Institute. "There was a gap in the literature around the issue of malicious use."


Quantum Computing Is the Next Big Security Risk

WIRED

The 20th century gave birth to the Nuclear Age as the power of the atom was harnessed and unleashed. Today, we are on the cusp of an equally momentous and irrevocable breakthrough: the advent of computers that draw their computational capability from quantum mechanics. US representative Will Hurd (R-Texas) (@HurdOnTheHill) chairs the Information Technology Subcommittee of the Committee on Oversight and Government Reform and serves on the Committee on Homeland Security and the Permanent Select Committee on Intelligence. The potential benefits of mastering quantum computing, from advances in cancer research to unlocking the mysteries of the universe, are limitless. But that same computing power can be used to unlock different kinds of secrets--from your personal financial or health records, to corporate research projects and classified government intelligence.


Terrorists 'certain' to get their hands on killer robots

Daily Mail - Science & tech

Parliament has been warned that terrorists and rogue states will get their hands on killer robots in the next few years. Academics and senior scientists warned that the 'genie is out of the bottle' when it comes to the use of artificial intelligence (AI) on the battlefield. AI experts also warned the House of Lords inquiry this week that terrorists are looking to hijack self-driving cars to mow down innocent people in a copycat Westminster Bridge-style attack. Autonomous weapons which can pull the trigger without human control are already being developed, experts warned. Alvin Wilby, of French defence firm Thales, said it is an 'absolute certainty in the very near future' that rogue states will be able to get their hands on robot arms.


I'm a pacifist, so why don't I support the Campaign to Stop Killer Robots?

The Guardian

The Campaign to Stop Killer Robots has called on the UN to ban the development and use of autonomous weapons: those that can identify, track and attack targets without meaningful human oversight. On Monday, the group released a sensationalist video, supported by some prominent artificial intelligence researchers, depicting a dystopian future in which such machines run wild. I am gratified that my colleagues are volunteering their efforts to ensure beneficial uses of artificial intelligence (AI) technology. But I am unconvinced of the effectiveness of the campaign beyond a symbolic gesture. Even though I identify myself strongly as a pacifist, I have reservations about signing up to the proposed ban.